The HSIC Bottleneck: Deep Learning without Back-Propagation

Authors

Wan-Duo Kurt Ma, J. P. Lewis, W. Bastiaan Kleijn

Abstract

The paper introduces the HSIC (Hilbert-Schmidt Independence Criterion) bottleneck as an alternative to conventional cross-entropy training with back-propagation. Each hidden layer is trained directly on an HSIC-based approximation of the information-bottleneck objective, maximizing the dependence between its activations and the desired output while minimizing the dependence between its activations and the input, so that no error signal needs to be propagated backwards through the network.
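As a point of reference for the objective described above, the following is a minimal NumPy sketch of the standard biased empirical HSIC estimator with Gaussian kernels, HSIC(X, Y) ≈ tr(KHLH)/(n-1)^2; the kernel bandwidths sigma_x and sigma_y are illustrative placeholders, not values taken from the paper.

import numpy as np

def gaussian_gram(x, sigma):
    # Pairwise squared distances -> Gaussian (RBF) Gram matrix.
    sq = np.sum(x ** 2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
    # Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 11^T centers the Gram matrices.
    n = x.shape[0]
    k = gaussian_gram(x, sigma_x)
    l = gaussian_gram(y, sigma_y)
    h = np.eye(n) - np.ones((n, n)) / n
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

# A dependent pair should score clearly higher than an independent pair.
rng = np.random.default_rng(0)
x = rng.standard_normal((256, 5))
print(hsic(x, x ** 2), hsic(x, rng.standard_normal((256, 5))))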

Similar Resources

Layer multiplexing FPGA implementation for deep back-propagation learning

Training of large-scale neural networks, like those used nowadays in Deep Learning schemes, requires long computational times or the use of high-performance computing solutions such as those based on cluster computation, GPU boards, etc. As a possible alternative, in this work the Back-Propagation learning algorithm is implemented on an FPGA board using a layer-multiplexing scheme, in which a si...
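The multiplexing idea can be illustrated in software: a single layer-evaluation routine (standing in for the one physical layer implemented on the FPGA) is reused for every layer of the network by swapping in that layer's stored weight bank. All names and sizes below are illustrative, not taken from the paper.

import numpy as np

def layer_step(w, b, x):
    # The single "physical" layer: one matrix-vector product plus sigmoid.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Weight banks for a 3-layer network, all served by the same layer_step unit.
rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]  # illustrative layer widths
banks = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
         for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(sizes[0])
for w, b in banks:  # time-multiplex the single unit over all layers
    x = layer_step(w, b, x)
print(x)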


Back-Propagation Without Weight Transport

In back-propagation (Rumelhart et al., 1985), connection weights are used both to compute node activations and to compute error gradients for hidden units. Grossberg (1987) has argued that this dual use of the same synaptic connections (“weight transport”) constitutes a bidirectional flow of information through synapses, which is biologically implausible. In this paper we formally and empirically demonstrate...
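One widely known way to sidestep weight transport is feedback alignment (Lillicrap et al.), which routes the output error backwards through a fixed random matrix B instead of the transpose of the forward weights. The toy NumPy sketch below illustrates that idea; it is not necessarily the specific scheme proposed in this paper, and the architecture and task are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out, lr = 4, 16, 1, 0.1

w1 = rng.standard_normal((n_hid, n_in)) * 0.5   # forward weights, layer 1
w2 = rng.standard_normal((n_out, n_hid)) * 0.5  # forward weights, layer 2
b = rng.standard_normal((n_hid, n_out)) * 0.5   # fixed random feedback weights

x = rng.standard_normal((n_in, 32))
t = (x.sum(axis=0, keepdims=True) > 0).astype(float)  # toy targets

for _ in range(200):
    h = np.tanh(w1 @ x)               # forward pass
    y = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    e = y - t                         # output error
    dh = (b @ e) * (1 - h ** 2)       # error routed through B, not w2.T
    w2 -= lr * e @ h.T / x.shape[1]
    w1 -= lr * dh @ x.T / x.shape[1]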


Developments to the Back-Propagation Learning Algorithm

The original back-propagation methods were plagued with variable parameters which affected both the convergence properties of the training and the generalisation abilities of the resulting network. These parameters presented many difficulties when attempting to use these networks to solve particular mapping problems. A combination of established numerical minimisation methods (Polak-Ribiere Con...
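For reference, the Polak-Ribiere rule the abstract alludes to picks each new search direction as d_{k+1} = -g_{k+1} + beta_k d_k with beta_k = g_{k+1}·(g_{k+1} - g_k) / (g_k·g_k). A minimal sketch, with a fixed step size standing in for the line search a serious implementation would use:

import numpy as np

def polak_ribiere(grad, x, iters=200, lr=0.1):
    # Polak-Ribiere conjugate gradient with a fixed step for brevity;
    # a real implementation would line-search along d instead.
    g = grad(x)
    d = -g
    for _ in range(iters):
        x = x + lr * d
        g_new = grad(x)
        if g_new @ g_new < 1e-16:  # converged
            return x
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ restart rule
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # simple convex quadratic test
print(polak_ribiere(lambda v: A @ v, np.array([1.0, 1.0])))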


Learning Sparse Latent Representations with the Deep Copula Information Bottleneck

Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent spac...
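A copula transformation of the kind mentioned can be sketched as a rank-based Gaussianization of each marginal: push every feature through its empirical CDF and then through the inverse standard-normal CDF, which preserves the dependence structure (the copula) while making each marginal standard normal. The sketch below assumes features are columns of a data matrix with no tied values; it illustrates the transformation only, not the full model of the paper.

import numpy as np
from scipy.stats import norm

def gaussian_copula_transform(x):
    # Ranks 1..n per column via the double-argsort trick (no ties assumed),
    # then empirical CDF values in (0, 1), then the normal quantile function.
    n = x.shape[0]
    ranks = x.argsort(axis=0).argsort(axis=0) + 1
    return norm.ppf(ranks / (n + 1))

x = np.random.default_rng(2).lognormal(size=(1000, 3))  # skewed marginals
z = gaussian_copula_transform(x)
print(z.mean(axis=0), z.std(axis=0))  # each feature is now roughly N(0, 1)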


Scaling Relationships in Back-propagation Learning

We present an empirical study of the required training time for neural networks to learn to compute the parity function using the back-propagation learning algorithm, as a function of the number of inputs. The parity function is a Boolean predicate whose order is equal to the number of inputs. We find that the training time behaves roughly as 4^n, where n is the num...
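A rough way to reproduce this kind of measurement is sketched below: enumerate all 2^n binary inputs, label each with its parity, and record how many iterations a small network needs to fit them. The network size, tolerance, and iteration cap are illustrative guesses, and the growth actually observed depends heavily on such choices.

import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier

def parity_training_iters(n):
    # All 2^n binary inputs, each labelled with its parity.
    x = np.array(list(product([0, 1], repeat=n)))
    y = x.sum(axis=1) % 2
    net = MLPClassifier(hidden_layer_sizes=(2 * n,), max_iter=5000,
                        tol=1e-6, random_state=0)
    net.fit(x, y)
    return net.n_iter_

for n in range(2, 7):
    print(n, parity_training_iters(n))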



Journal

Journal title: Proceedings of the AAAI Conference on Artificial Intelligence

Year: 2020

ISSN: 2374-3468, 2159-5399

DOI: 10.1609/aaai.v34i04.5950